6 research outputs found
Neural architectures for open-type relation argument extraction
In this work, we focus on the task of open-type relation argument extraction (ORAE): given a corpus, a query entity Q, and a knowledge base relation (e.g., “Q authored notable work with title X”), the model has to extract an argument of non-standard entity type (entities that cannot be extracted by a standard named entity tagger, for example, X: the title of a book or a work of art) from the corpus. We develop and compare a wide range of neural models for this task, yielding large improvements over a strong baseline obtained with a neural question answering system. We systematically compare the impact of different sentence encoding architectures and answer extraction methods. An encoder based on gated recurrent units combined with a conditional random field tagger yields the best results. We release a data set to train and evaluate ORAE, based on Wikidata and obtained by distant supervision.
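The best-performing configuration above (a GRU-based encoder feeding a conditional random field tagger) can be sketched at decode time: the encoder produces per-token tag scores, and the CRF layer picks the highest-scoring tag path via Viterbi decoding. The sketch below is a hedged toy illustration with invented scores and a simple BIO tag set, not the paper's trained model.

```python
# Viterbi decoding for a linear-chain CRF tagger over per-token scores
# from an encoder. All scores are made-up illustrative numbers.

TAGS = ["O", "B", "I"]  # outside / begin-argument / inside-argument

def viterbi(emissions, transitions):
    """Return the highest-scoring BIO tag sequence.

    emissions:   one dict per token, {tag: encoder score}
    transitions: {(prev_tag, tag): score}, learned by the CRF layer
    """
    scores = {t: emissions[0][t] for t in TAGS}
    backptrs = []
    for em in emissions[1:]:
        new_scores, ptrs = {}, {}
        for t in TAGS:
            best_prev = max(TAGS, key=lambda p: scores[p] + transitions[(p, t)])
            new_scores[t] = scores[best_prev] + transitions[(best_prev, t)] + em[t]
            ptrs[t] = best_prev
        backptrs.append(ptrs)
        scores = new_scores
    # Backtrack from the best final tag.
    tag = max(TAGS, key=lambda t: scores[t])
    path = [tag]
    for ptrs in reversed(backptrs):
        tag = ptrs[tag]
        path.append(tag)
    return list(reversed(path))

# Toy query "Q authored notable work with title X"; the argument span
# ("The Old Man ...") should come out tagged B I I.
tokens = ["Q", "authored", "The", "Old", "Man"]
emissions = [
    {"O": 2, "B": -1, "I": -1},   # Q
    {"O": 2, "B": -1, "I": -1},   # authored
    {"O": -1, "B": 2, "I": 0},    # The
    {"O": -1, "B": 0, "I": 2},    # Old
    {"O": -1, "B": 0, "I": 2},    # Man
]
transitions = {
    ("O", "O"): 0, ("O", "B"): 0, ("O", "I"): -10,  # I may not follow O
    ("B", "O"): 0, ("B", "B"): -2, ("B", "I"): 1,
    ("I", "O"): 0, ("I", "B"): -2, ("I", "I"): 1,
}
print(viterbi(emissions, transitions))  # -> ['O', 'O', 'B', 'I', 'I']
```

The transition scores are what distinguish a CRF from per-token classification: the strong penalty on O→I, for example, keeps the decoder from emitting an inside-argument tag without an opening B.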
Generation of Radiology Findings in Chest X-Ray by Leveraging Collaborative Knowledge
Among all the sub-sections in a typical radiology report, the Clinical
Indications, Findings, and Impression often reflect important details about the
health status of a patient. The information included in Impression is also
often covered in Findings. While Findings and Impression can be deduced by
inspecting the image, Clinical Indications often require additional context.
The cognitive task of interpreting medical images remains the most critical and
often time-consuming step in the radiology workflow. Instead of generating an
end-to-end radiology report, in this paper, we focus on generating the Findings
from automated interpretation of medical images, specifically chest X-rays
(CXRs). This work thus aims to reduce the workload of radiologists, who spend
most of their time either writing or narrating the Findings. Unlike past
research, which addresses radiology report generation as a single-step image
captioning task, we have further taken into consideration the complexity of
interpreting CXR images and propose a two-step approach: (a) detecting the
regions with abnormalities in the image, and (b) generating relevant text for
regions with abnormalities by employing a generative large language model
(LLM). This two-step approach introduces a layer of interpretability and aligns
the framework with the systematic reasoning that radiologists use when
reviewing a CXR.
Comment: Information Technology and Quantitative Management (ITQM 2023)
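The two-step approach can be sketched as a pipeline: step (a) flags abnormal regions, and step (b) generates findings text only for those regions. In the sketch below, `detect_regions` and `generate_finding` are hypothetical stand-ins for the abnormality detector and the generative LLM, not the authors' actual components; their outputs are fixed illustrative values.

```python
# Hedged sketch of the two-step Findings-generation pipeline described
# above. Both steps are stubs with invented names and fixed outputs.

from dataclasses import dataclass

@dataclass
class Region:
    name: str        # anatomical region, e.g. "left lower lobe"
    abnormal: bool   # step (a): abnormality flag from a detector

def detect_regions(cxr_image):
    """Step (a): stand-in for an abnormality detector over a CXR image."""
    # A real system would run a vision model here; we return fixed
    # illustrative regions for the sketch.
    return [
        Region("cardiac silhouette", abnormal=False),
        Region("left lower lobe", abnormal=True),
    ]

def generate_finding(region):
    """Step (b): stand-in for a generative LLM conditioned on a region."""
    # A real system would prompt an LLM, e.g.
    # llm(f"Describe the abnormality in the {region.name} of this CXR.")
    return f"Opacity noted in the {region.name}."

def write_findings(cxr_image):
    regions = detect_regions(cxr_image)
    return [generate_finding(r) for r in regions if r.abnormal]

print(write_findings(cxr_image=None))
# -> ['Opacity noted in the left lower lobe.']
```

Routing only the abnormal regions into the generation step is what gives the framework its interpretability: each generated sentence is traceable to a specific detected region rather than to the image as a whole.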